Randomized Quasi-Random Testing
Random testing is a fundamental testing technique that can be used to generate test cases for both hardware and software systems. Quasi-random testing was proposed as an enhancement to the cost-effectiveness of random testing: in addition to having computation overheads similar to those of random testing, it makes use of quasi-random sequences to generate low-discrepancy and low-dispersion test cases that help deliver high failure-detection effectiveness. Currently, few algorithms exist to generate quasi-random sequences, and these are mostly deterministic, rather than random. A previous study of quasi-random testing examined two methods for randomizing quasi-random sequences to improve their applicability in testing. However, these randomization methods still have shortcomings: one method does not introduce much randomness to the test cases, while the other does not support incremental test case generation. In this paper, we present an innovative approach to incrementally randomizing quasi-random sequences. The test cases generated by this new approach show a high degree of randomness and evenness in distribution. We also conduct simulations and empirical studies to demonstrate the applicability and effectiveness of our approach in software testing.
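The abstract does not give the paper's randomization scheme, but the general idea can be sketched with a standard technique: a Cranley-Patterson rotation of a Halton sequence, which adds a fixed random shift (mod 1) per dimension to a deterministic low-discrepancy sequence. The shift preserves the even coverage while randomizing the points, and generation stays incremental (just advance the index). All names below are illustrative, not the paper's algorithm.

```python
import random

def van_der_corput(n, base=2):
    """Base-b radical inverse: the classic deterministic quasi-random sequence."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def randomized_halton(count, bases=(2, 3), seed=0):
    """Cranley-Patterson rotation: shift each dimension by a random offset mod 1.
    Low discrepancy is preserved, points are randomized, and generation is
    incremental: test case i depends only on the index i and the fixed shifts."""
    rng = random.Random(seed)
    shifts = [rng.random() for _ in bases]
    return [
        tuple((van_der_corput(i, b) + s) % 1.0 for b, s in zip(bases, shifts))
        for i in range(1, count + 1)
    ]

points = randomized_halton(100)
```

Different seeds yield different but equally well-spread point sets, which is the property that makes randomized quasi-random sequences usable for repeated testing runs.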
Influence of convection and biomass burning outflow on tropospheric chemistry over the tropical Pacific
Observations over the tropics from the Pacific Exploratory Mission-Tropics A Experiment are analyzed using a one-dimensional model with an explicit formulation for convective transport. Adopting tropical convective mass fluxes from a general circulation model (GCM) yields a large discrepancy between observed and simulated CH3I concentrations. Observations of CH3I imply the convective mass outflux to be more evenly distributed with altitude over the tropical ocean than suggested by the GCM. We find that using a uniform convective turnover lifetime of 20 days in the upper and middle troposphere enables the model to reproduce CH3I observations. The model reproduces observed concentrations of H2O2 and CH3OOH. Convective transport of CH3OOH from the lower troposphere is estimated to account for 40-80% of CH3OOH concentrations in the upper troposphere. Photolysis of CH3OOH transported by convection more than doubles the primary HOx source and increases OH concentrations and O3 production by 10-50% and 0.4 ppbv d⁻¹, respectively, above 11 km. Its effect on the OH concentration and O3 production integrated over the tropospheric column is, however, small. The effects of pollutant import from biomass burning regions are much more dominant. Using C2H2 as a tracer, we estimate that biomass burning outflow enhances O3 concentrations, O3 production, and concentrations of NOx and OH by 60%, 45%, 75%, and 7%, respectively. The model overestimates HNO3 concentrations by about a factor of 2 above 4 km for the upper one-third quantile of C2H2 data, while it generally reproduces HNO3 concentrations for the lower and middle one-third quantiles of C2H2 data. Copyright 2000 by the American Geophysical Union.
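The role a convective turnover lifetime plays can be illustrated with a toy two-box model: the lower troposphere supplies an upper box by convection with the abstract's 20-day turnover timescale, balanced against a first-order chemical loss. This is a minimal sketch, not the authors' one-dimensional model; the 5-day loss timescale and all function names are illustrative assumptions.

```python
def step_upper_box(c_upper, c_lower, tau_days=20.0, loss_days=5.0, dt_days=0.1):
    """One Euler step for the upper-box mixing ratio:
    convective supply from below (turnover lifetime tau, from the abstract)
    minus first-order loss (illustrative 5-day timescale, e.g. photolysis)."""
    d_conv = (c_lower - c_upper) / tau_days   # convective turnover term
    d_loss = -c_upper / loss_days             # chemical/photolytic loss term
    return c_upper + dt_days * (d_conv + d_loss)

# Spin up toward steady state: c_upper/c_lower -> loss/(loss + tau) = 5/25 = 0.2
c_upper = 0.0
for _ in range(3000):  # 300 simulated days
    c_upper = step_upper_box(c_upper, c_lower=1.0)
```

The steady-state ratio loss/(loss + tau) shows why a shorter turnover lifetime (stronger convection) raises upper-tropospheric concentrations of short-lived species such as CH3OOH.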
Fast Wavefront Propagation (FWP) for Computing Exact Geodesic Distances on Meshes
Computing geodesic distances on triangle meshes is a fundamental problem in computational geometry and computer graphics. To date, two notable classes of algorithms, the Mitchell-Mount-Papadimitriou (MMP) algorithm and the Chen-Han (CH) algorithm, have been proposed. Although these algorithms can compute exact geodesic distances if numerical computation is exact, they are computationally expensive, which diminishes their usefulness for large-scale models and/or time-critical applications. In this paper, we propose the fast wavefront propagation (FWP) framework for improving the performance of both the MMP and CH algorithms. Unlike the original algorithms that propagate only a single window (a data structure that locally encodes geodesic information) at each iteration, our method organizes windows with a bucket data structure so that it can process a large number of windows simultaneously without compromising wavefront quality. Thanks to its macro nature, the FWP method is less sensitive to mesh triangulation than the MMP and CH algorithms. We evaluate our FWP-based MMP and CH algorithms on a wide range of large-scale real-world models. Computational results show that our method can improve the speed by a factor of 3-10.
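The bucketing idea can be illustrated on ordinary graph shortest paths: a Dial-style bucket queue settles a whole batch of vertices per iteration instead of one, just as FWP propagates many windows at once. This is an illustrative sketch only; the actual FWP buckets MMP/CH windows on a mesh, and the sketch is exact only when the bucket width does not exceed the minimum edge length.

```python
import collections
import math

def bucket_dijkstra(adj, src, width=1.0):
    """Shortest paths with a bucket queue: all entries in the current bucket
    are processed together, mimicking FWP's batch propagation of windows.
    `adj` maps node -> list of (neighbor, edge_length); requires
    width <= min edge length for exact results."""
    dist = {src: 0.0}
    buckets = collections.defaultdict(list)
    buckets[0].append(src)
    settled = set()
    while buckets:
        b = min(buckets)                  # next non-empty bucket
        batch = buckets.pop(b)            # process the whole batch at once
        for u in batch:
            if u in settled or math.floor(dist[u] / width) != b:
                continue                  # stale entry superseded by a shorter path
            settled.add(u)
            for v, w in adj.get(u, []):
                nd = dist[u] + w
                if nd < dist.get(v, math.inf):
                    dist[v] = nd
                    buckets[math.floor(nd / width)].append(v)
    return dist

graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0), ("d", 5.0)], "c": [("d", 1.0)]}
distances = bucket_dijkstra(graph, "a")
```

Batching trades strict priority-queue ordering for throughput, which is also why FWP's wavefront quality, rather than exact ordering, is the thing the authors must preserve.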
How can non-technical end users effectively test their spreadsheets?
Purpose – An alarming number of spreadsheet faults have been reported in the literature, indicating that effective and easy-to-apply spreadsheet testing techniques are not available for “non-technical,” end-user programmers. The purpose of this paper is to alleviate the problem by introducing a metamorphic testing (MT) technique for spreadsheets. Design/methodology/approach – The paper discussed four common challenges encountered by end-user programmers when testing a spreadsheet. The MT technique was then discussed and how it could be used to solve the common challenges was explained. An experiment involving several “real-world” spreadsheets was performed to determine the viability and effectiveness of MT. Findings – The experiment confirmed that MT is highly effective in spreadsheet fault detection, and yet MT is a general technique that can be easily used by end-user programmers to test a large variety of spreadsheet applications. Originality/value – The paper provides a detailed discussion of some common challenges of spreadsheet testing encountered by end-user programmers. To the best of the authors' knowledge, the paper is the first that includes an empirical study of how effective MT is in spreadsheet fault detection from an end-user programmer's perspective.
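The flavor of MT for spreadsheets can be shown with a tiny example: an end user checking an AVERAGE formula does not need to know the correct average, only relations that must hold between related runs. The functions and relations below are illustrative, not the paper's specific technique.

```python
import random

def average(cells):
    """Stand-in for a spreadsheet AVERAGE formula under test."""
    return sum(cells) / len(cells)

def check_metamorphic_relations(f, cells, tol=1e-9):
    """Two MRs an end user can apply without knowing the true answer:
    MR1 (permutation): reordering the cells must not change the result.
    MR2 (shift): adding k to every cell must add exactly k to the result."""
    shuffled = cells[:]
    random.shuffle(shuffled)
    mr1 = abs(f(cells) - f(shuffled)) < tol
    k = 7.5
    mr2 = abs(f([c + k for c in cells]) - (f(cells) + k)) < tol
    return mr1 and mr2

def buggy_average(cells):
    """Seeded fault: off-by-one denominator, a classic spreadsheet mistake."""
    return sum(cells) / (len(cells) + 1)
```

Note that MR1 alone misses the seeded fault (permutation does not disturb the wrong denominator) while MR2 exposes it, which mirrors the paper's finding that a *diverse* set of relations matters.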
Code coverage of adaptive random testing
Random testing is a basic software testing technique that can be used to assess software reliability as well as to detect software failures. Adaptive random testing has been proposed to enhance the failure-detection capability of random testing. Previous studies have shown that adaptive random testing can use fewer test cases than random testing to detect the first software failure. In this paper, we evaluate and compare the performance of adaptive random testing and random testing from another perspective, that of code coverage. As shown in various investigations, higher code coverage not only brings a higher failure-detection capability, but also improves the effectiveness of software reliability estimation. We conduct a series of experiments based on two categories of code coverage criteria: structure-based coverage and fault-based coverage. Adaptive random testing can achieve higher code coverage than random testing with the same number of test cases. Our experimental results imply that, in addition to having a better failure-detection capability than random testing, adaptive random testing also delivers a higher effectiveness in assessing software reliability, and a higher confidence in the reliability of the software under test even when no failure is detected.
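A common ART variant, fixed-size-candidate-set (FSCS) ART, makes the even-spreading idea concrete: each new test case is the candidate farthest from all previously executed tests. The one-dimensional sketch below is illustrative (the abstract does not fix a particular ART algorithm), with a hypothetical contiguous failure region standing in for the program under test.

```python
import random

def fscs_art(is_failure, domain=(0.0, 1.0), n_tests=100, k=10, seed=0):
    """Fixed-Size-Candidate-Set ART in one dimension: draw k random
    candidates, execute the one farthest from all previous test cases.
    Spreading tests evenly is what tends to raise coverage faster than
    pure random testing with the same budget."""
    rng = random.Random(seed)
    executed = [rng.uniform(*domain)]
    if is_failure(executed[0]):
        return executed[0], executed
    for _ in range(n_tests - 1):
        candidates = [rng.uniform(*domain) for _ in range(k)]
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
        if is_failure(best):
            return best, executed
    return None, executed

# Hypothetical failure region: a contiguous 5% block of the input domain.
found, tests = fscs_art(lambda x: 0.60 <= x <= 0.65)
```

The distance computation makes each test O(n) in the number of executed tests; that overhead is the usual price ART pays for its even spread.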
Towards Multi-class Object Detection in Unconstrained Remote Sensing Imagery
Automatic multi-class object detection in remote sensing images in unconstrained scenarios is of high interest for several applications including traffic monitoring and disaster management. The huge variation in object scale, orientation, category, and complex backgrounds, as well as the different camera sensors, pose great challenges for current algorithms. In this work, we propose a new method consisting of a novel joint image cascade and feature pyramid network with multi-size convolution kernels to extract multi-scale strong and weak semantic features. These features are fed into rotation-based region proposal and region of interest networks to produce object detections. Finally, rotational non-maximum suppression is applied to remove redundant detections. During training, we minimize joint horizontal and oriented bounding box loss functions, as well as a novel loss that enforces oriented boxes to be rectangular. Our method achieves 68.16% mAP on horizontal and 72.45% mAP on oriented bounding box detection tasks on the challenging DOTA dataset, outperforming all published methods by a large margin (+6% and +12% absolute improvement, respectively). Furthermore, it generalizes to two other datasets, NWPU VHR-10 and UCAS-AOD, and achieves competitive results with the baselines even when trained on DOTA. Our method can be deployed in multi-class object detection applications, regardless of the image and object scales and orientations, making it a great choice for unconstrained aerial and satellite imagery.
Comment: ACCV 201
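The abstract does not give the formula for the loss that enforces oriented boxes to be rectangular, but one plausible sketch penalizes each corner angle's deviation from 90 degrees: the squared cosine of each corner angle is zero exactly when the quadrilateral is a rectangle. This is a guess at the spirit of such a loss, not the paper's definition.

```python
import math

def rectangularity_loss(quad):
    """Hypothetical penalty that is zero iff every corner of the oriented
    quadrilateral is a right angle. For each corner, take the cosine of the
    angle between its two incident edges; squared cosines vanish at 90 degrees."""
    loss = 0.0
    n = len(quad)
    for i in range(n):
        p_prev, p, p_next = quad[i - 1], quad[i], quad[(i + 1) % n]
        ax, ay = p_prev[0] - p[0], p_prev[1] - p[1]
        bx, by = p_next[0] - p[0], p_next[1] - p[1]
        cos = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        loss += cos * cos
    return loss
```

A unit square yields zero loss, while a sheared parallelogram is penalized, which is the gradient signal a training loop would use to push predicted quadrilaterals toward rectangles.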
How effectively does metamorphic testing alleviate the oracle problem?
In software testing, a mechanism that can verify the correctness of test case execution results is called an oracle. The oracle problem occurs when either an oracle does not exist, or exists but is too expensive to be used. Metamorphic testing is a testing approach which uses metamorphic relations, properties of the software under test represented in the form of relations among inputs and outputs of multiple executions, to help verify the correctness of a program. This paper presents new empirical evidence to support this approach, which has been used to alleviate the oracle problem in various applications and to enhance several software analysis and testing techniques. It has been observed that identification of a sufficient number of appropriate metamorphic relations for testing, even by inexperienced testers, was possible with a very small amount of training. Furthermore, the cost-effectiveness of the approach could be enhanced through the use of more diverse metamorphic relations. The empirical studies presented in this paper clearly show that a small number of diverse metamorphic relations, even those identified in an ad hoc manner, had a similar fault-detection capability to a test oracle, and could thus effectively help alleviate the oracle problem.
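The oracle problem is easy to see for numerical code: for an arbitrary x we may not know sin(x) exactly, yet we do know relations between outputs. The checker below applies two classic metamorphic relations for sine; it is an illustrative sketch, not drawn from the paper's studies.

```python
import math
import random

def check_sine_mrs(f, trials=200, tol=1e-9, seed=0):
    """Metamorphic checking without an oracle, using two known identities:
    MR1: f(pi - x) == f(x)      (supplementary-angle relation)
    MR2: f(-x)    == -f(x)      (odd-function relation)
    A violation on any generated input reveals a fault."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-10.0, 10.0)
        if abs(f(math.pi - x) - f(x)) > tol:
            return False
        if abs(f(-x) + f(x)) > tol:
            return False
    return True
```

A seeded fault such as `lambda x: math.sin(x) + 1e-3 * x` slips past MR2 (the perturbation is itself odd) but is caught by MR1, again illustrating why relation diversity drives fault-detection capability.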
ADVISE: Symbolism and External Knowledge for Decoding Advertisements
In order to convey the most content in their limited space, advertisements embed references to outside knowledge via symbolism. For example, a motorcycle stands for adventure (a positive property the ad wants associated with the product being sold), and a gun stands for danger (a negative property to dissuade viewers from undesirable behaviors). We show how to use symbolic references to better understand the meaning of an ad. We further show how anchoring ad understanding in general-purpose object recognition and image captioning improves results. We formulate the ad understanding task as matching the ad image to human-generated statements that describe the action that the ad prompts, and the rationale it provides for taking this action. Our proposed method outperforms the state of the art on this task, and on an alternative formulation of question-answering on ads. We show additional applications of our learned representations for matching ads to slogans, and clustering ads according to their topic, without extra training.
Comment: To appear, Proceedings of the European Conference on Computer Vision (ECCV)
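The matching formulation can be shown in miniature: embed the ad image and the candidate action-reason statements in a shared space and rank statements by similarity. The toy vectors below stand in for learned embeddings; this sketches the task setup only, not the ADVISE model.

```python
def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    num = sum(a * b for a, b in zip(u, v))
    du = sum(a * a for a in u) ** 0.5
    dv = sum(b * b for b in v) ** 0.5
    return num / (du * dv)

def rank_statements(image_emb, statement_embs):
    """Return statement indices ordered from best to worst match for the
    image embedding; the top-ranked statement is the predicted action-reason."""
    return sorted(range(len(statement_embs)),
                  key=lambda i: cosine(image_emb, statement_embs[i]),
                  reverse=True)
```

Because ranking needs only similarities in the shared space, the same embeddings can be reused for slogan matching and topic clustering "without extra training," as the abstract notes.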
Adaptive subspace sampling for class imbalance processing
© 2016 IEEE. This paper presents a novel oversampling technique that addresses highly imbalanced data distributions. At present, imbalanced data with anomalous class distributions and underrepresented samples are difficult to handle with a variety of conventional machine learning technologies. In order to balance class distributions, an adaptive subspace self-organizing map (ASSOM) that combines a local mapping scheme and a globally competitive rule is proposed to artificially generate synthetic samples focusing on minority class samples. The ASSOM exhibits feature invariance, including invariance to translation, scaling, and rotation, and it retains the independence of the basis vectors in each module. Specifically, the basis vectors generated via each ASSOM module avoid generating repeated representative features that offer nothing but heavy computational load. Several experimental results demonstrate that the proposed ASSOM method, trained in a supervised manner, is superior to other existing oversampling techniques.
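For contrast with ASSOM, the baseline oversampling idea it improves upon can be sketched in a few lines: synthesize minority samples by interpolating between a minority point and one of its k nearest minority neighbors (SMOTE-style). This is the conventional technique the paper compares against in spirit, not the ASSOM itself, and all names are illustrative.

```python
import random

def interpolation_oversample(minority, n_new, k=3, seed=0):
    """SMOTE-style oversampling: each synthetic sample lies on the segment
    between a random minority point and one of its k nearest minority
    neighbors. ASSOM instead learns translation/scale/rotation-invariant
    subspace modules and generates samples from them."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neighbors = sorted(
            (q for q in minority if q is not p),
            key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)),
        )[:k]
        q = rng.choice(neighbors)
        t = rng.random()
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(p, q)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0), (0.2, 0.4)]
new_points = interpolation_oversample(minority, n_new=8)
```

Interpolated samples always fall inside the minority class's convex hull, which is exactly the limitation (no rotation or scale awareness) that subspace methods like ASSOM aim to overcome.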